K-hyperplane Hinge-Minimax Classifier

Authors

  • Margarita Osadchy
  • Tamir Hazan
  • Daniel Keren
Abstract

We explore a novel approach to upper-bound the misclassification error for problems with data comprising a small number of positive samples and a large number of negative samples. We use the hinge loss to upper-bound the misclassification error of the positive examples, and the minimax risk to upper-bound the misclassification error with respect to the worst-case distribution that generates the negative examples. This approach is computationally appealing since the majority of training examples (those belonging to the negative class) are represented by the statistics of their distribution, in contrast to kernel SVM, which produces a very large number of support vectors in such settings. We derive empirical risk bounds for linear and non-linear classification and show that they are independent of the dimension and decay as 1/√m for m samples. We propose an efficient algorithm for training an intersection of a finite number of hyperplanes and demonstrate its effectiveness on real data, including letter and scene recognition.
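As a rough single-hyperplane sketch of this objective in Python (assuming the negatives are summarized only by their empirical mean and covariance, and using a Cantelli-style worst-case tail bound as a stand-in for the paper's minimax risk; all names below are illustrative, and the actual K-hyperplane training algorithm differs):

    import numpy as np
    from scipy.optimize import minimize

    def hinge_minimax_objective(theta, X_pos, mu_neg, cov_neg, lam=1.0):
        # theta packs a single hyperplane: scores are w . x + b.
        w, b = theta[:-1], theta[-1]
        # Hinge loss upper-bounds the 0/1 error on the (few) positive examples.
        hinge = np.maximum(0.0, 1.0 - (X_pos @ w + b)).mean()
        # Negatives enter only through their mean and covariance.  Cantelli's
        # inequality bounds the worst-case probability, over all distributions
        # with these moments, that a negative lands on the positive side.
        m = mu_neg @ w + b        # mean margin of the negative class
        s = w @ cov_neg @ w       # variance of the negative margin
        worst = 1.0 if m >= 0 else s / (s + m ** 2)
        return hinge + lam * worst

    # Toy data: few positives, many negatives reduced to their statistics.
    rng = np.random.default_rng(0)
    X_pos = rng.normal(loc=2.0, size=(10, 2))
    X_neg = rng.normal(loc=-2.0, size=(5000, 2))
    mu_neg, cov_neg = X_neg.mean(axis=0), np.cov(X_neg, rowvar=False)

    res = minimize(hinge_minimax_objective, x0=np.ones(3),
                   args=(X_pos, mu_neg, cov_neg), method="Nelder-Mead")
    w, b = res.x[:-1], res.x[-1]

Because the negatives appear only through mu_neg and cov_neg, the cost of each objective evaluation is independent of the number of negative samples, which is the computational advantage the abstract points to.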

Similar Articles

Latent Hinge-Minimax Risk Minimization for Inference from a Small Number of Training Samples

Deep Learning (DL) methods show very good performance when trained on large, balanced data sets. However, many practical problems involve imbalanced data sets and/or classes with a small number of training samples. The performance of DL methods, as well as of more traditional classifiers, drops significantly in such settings. Most of the existing solutions for imbalanced problems focus on customizi...


Error Bound for Slope SVM in High Dimension

In this paper, we propose a new estimator, the Slope SVM, which minimizes the hinge loss with the Slope penalization introduced by [3]. We study the asymptotic behavior of the ℓ2 error between the theoretical hinge loss minimizer and the Slope estimator. We prove that Slope achieves a (k/n) log(p/k) rate with high probability and in expectation under the Weighted Restricted Eigenvalue Condition. T...
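The Slope penalization referred to above is the sorted-ℓ1 norm: the largest coefficient magnitude is paired with the largest regularization weight, and so on down a non-increasing sequence λ1 ≥ … ≥ λp. A minimal sketch of the penalized hinge objective in Python (assuming the standard SLOPE definition; the paper's exact λ sequence and scaling are not reproduced here):

    import numpy as np

    def slope_penalty(beta, lambdas):
        # SLOPE pairs the largest |beta_j| with the largest lambda, the
        # second largest with the second lambda, and so on.
        abs_sorted = np.sort(np.abs(beta))[::-1]   # |beta|_(1) >= |beta|_(2) >= ...
        return np.sum(lambdas * abs_sorted)        # lambdas assumed non-increasing

    def slope_svm_objective(beta, X, y, lambdas):
        # Hinge loss plus the Slope penalty, with labels y in {-1, +1}.
        hinge = np.maximum(0.0, 1.0 - y * (X @ beta)).mean()
        return hinge + slope_penalty(beta, lambdas)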


AUC Maximization with K-hyperplane

The area under the ROC curve (AUC) is a measure of interest in various machine learning and data mining applications. It has been widely used to evaluate classification performance on heavily imbalanced data. The kernelized AUC maximization machines have established a superior generalization ability compared to linear AUC machines because of their capability in modeling the complex nonlinear st...
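For reference, the empirical AUC and its standard convex surrogate, a hinge loss over all positive/negative score pairs, can be written as follows (this sketches generic AUC maximization, not the specific kernelized machines the paper studies):

    import numpy as np

    def empirical_auc(scores_pos, scores_neg):
        # Fraction of positive/negative pairs ranked correctly (ties count half).
        diff = scores_pos[:, None] - scores_neg[None, :]
        return (diff > 0).mean() + 0.5 * (diff == 0).mean()

    def pairwise_hinge_auc_loss(scores_pos, scores_neg, margin=1.0):
        # Convex surrogate for 1 - AUC: hinge on every pairwise score difference.
        diff = scores_pos[:, None] - scores_neg[None, :]
        return np.maximum(0.0, margin - diff).mean()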


Hinging Hyperplanes for Non-Linear Identification

The hinging hyperplane method is an elegant and efficient way of identifying piecewise linear models based on the data collected from an unknown linear or nonlinear system. This approach provides "a powerful and efficient alternative to neural networks with computing times several orders of magnitude less than fitting neural networks with a comparable number of parameters", as stated in [3]. In this r...
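A hinge function in this method is the pointwise maximum (or minimum) of two hyperplanes, joined along the set where they agree, and the model is a sum of such hinges. A minimal sketch of the model class (the identification algorithm itself, which alternately refits the two hyperplanes on the data falling on each side of the hinge, is omitted):

    import numpy as np

    def hinge_fn(X, theta_plus, theta_minus):
        # One hinge: the larger of two affine functions (Breiman's form).
        X1 = np.column_stack([np.ones(len(X)), X])   # prepend an intercept
        return np.maximum(X1 @ theta_plus, X1 @ theta_minus)

    def hinging_hyperplane_model(X, hinges):
        # Piecewise-linear model: a sum of hinge functions.
        return sum(hinge_fn(X, tp, tm) for tp, tm in hinges)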


An Interpretable SVM Classifier by Discrete-Weight Optimization

The main problem investigated in this paper is to learn "interpretable" linear classifiers from data. Interpretable models are captured using "discrete" linear functions. The learning problem is formulated as minimizing the cumulative zero-one loss of a discrete hyperplane, penalized by the standard L2 regularizer. This learning task is cast as a MILP problem, and solved using convex relaxation...
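To make the learning task concrete, here is a brute-force sketch of the same objective over a tiny discrete weight set (the paper casts the problem as a MILP and solves it with convex relaxations; exhaustive search is only feasible in very low dimension and is used purely for illustration):

    import numpy as np
    from itertools import product

    def discrete_hyperplane_search(X, y, values=(-1, 0, 1), C=0.1):
        # Minimize the cumulative zero-one loss of a hyperplane whose weights
        # are restricted to a small discrete set, plus an L2 penalty.
        best_w, best_obj = None, np.inf
        for w in product(values, repeat=X.shape[1]):
            w = np.asarray(w, dtype=float)
            errors = np.sum(y * (X @ w) <= 0)    # zero-one loss, y in {-1, +1}
            obj = errors + C * (w @ w)           # L2-regularized objective
            if obj < best_obj:
                best_w, best_obj = w, obj
        return best_w, best_obj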




Publication date: 2015